Today's software is bloated, leading to significant resource waste. This bloat is prevalent across the entire software stack, from the operating system all the way to software backends, frontends, and web pages. In this paper, we study how prevalent bloat is in machine learning containers. We develop MMLB, a framework for analyzing bloat in machine learning containers that measures the amount of bloat at the container and package levels. Our tool quantifies the sources of bloat and removes them. We integrate our tool with vulnerability analysis tools to measure how bloat affects container vulnerabilities. We experimentally study 15 machine learning containers from the official TensorFlow, PyTorch, and NVIDIA container registries under different tasks (i.e., training, tuning, and serving). Our findings show that machine learning containers contain bloat encompassing up to 80\% of the container size. We find that debloating machine learning containers speeds up provisioning times by up to $3.7\times$ and removes up to 98\% of all vulnerabilities detected by vulnerability analysis tools such as Grype. Finally, we relate our results to the larger discussion of technical debt in machine learning systems.
Physics-guided neural networks (PGNNs) represent an emerging class of neural networks trained using physics-guided (PG) loss functions (which capture violations of known physics in the network outputs) along with the supervision contained in data. Existing work on PGNNs has demonstrated the efficacy of adding a single PG loss function to the neural network objective, with a constant trade-off parameter, to ensure better generalizability. However, in the presence of multiple PG functions with competing gradient directions, the contributions of the different PG loss functions need to be tuned adaptively over the course of training to arrive at generalizable solutions. We demonstrate the presence of competing PG losses in the generic neural network problem of solving for the lowest (or highest) eigenvector of a physics-based eigenvalue equation, which is commonly encountered in many scientific problems. We present a novel approach to handle competing PG losses and demonstrate its efficacy in learning generalizable solutions in two motivating applications: quantum mechanics and electromagnetic propagation. All code and data used in this work are available at https://github.com/jayroxis/cophy-pgnn.